Endoscopy 2015; 47(08): 667-668
DOI: 10.1055/s-0034-1392251
Editorial
© Georg Thieme Verlag KG Stuttgart · New York

Colonoscopy quality indicators: from individual performance to institutional policy

Marek Bugajski 1, Michal F. Kaminski 1, 2, 3

1   Department of Gastroenterological Oncology, The Maria Sklodowska-Curie Memorial Cancer Centre and Institute of Oncology, Warsaw, Poland
2   Department of Gastroenterology and Hepatology, Medical Center for Postgraduate Education, Warsaw, Poland
3   Institute of Health and Society, University of Oslo, Oslo, Norway

Publication History

Publication Date:
30 July 2015 (online)


The quality of colonoscopy is considered key to the efficacy of the examination. The two most widely used quality indicators for colonoscopy – cecal intubation rate (CIR) and adenoma detection rate (ADR) – have been shown to be associated with interval colorectal cancer [1] [2] [3].

The major strength of these two quality indicators is that they can be measured at several levels: the individual endoscopist, the endoscopy center, and the screening program. From the quality improvement perspective, the method used to calculate the CIR and ADR at each level is essentially the same, but the information obtained at each level, and its significance, differ considerably. At the individual level, underperformance may reveal a need for additional colonoscopy training, whereas at the institutional or program level it may underscore deficiencies in policies. To date, the ADR and CIR have been widely used at the individual level, whereas reports at the center level are relatively infrequent [4] [5] [6] [7].
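For reference, the calculations referred to above follow the standard definitions of these indicators; individual programs may apply minor exclusions (for example, obstructing tumors for the CIR, or restriction to screening procedures for the ADR), so the following should be read as a simplified sketch:

\mathrm{CIR} = \frac{\text{number of colonoscopies in which the cecum is intubated}}{\text{total number of colonoscopies performed}}

\mathrm{ADR} = \frac{\text{number of (screening) colonoscopies with at least one adenoma detected}}{\text{total number of (screening) colonoscopies performed}}

Because the denominator can be formed per endoscopist, per center, or per program, the same two formulas support measurement at all three levels discussed above.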

In this issue of Endoscopy, Belderbos et al. present the results of a prospective audit of routine colonoscopy performance, carried out between November 2012 and January 2013 at two academic and five nonacademic hospitals in The Netherlands [8]. The audit demonstrated high overall performance but significant between-hospital variability in CIR (range 89.4 % – 99.2 %) and ADR (range 24.8 % – 46.8 %), which was independent of case mix. The authors used the raw CIR and ADR data to develop a colonoscopy quality indicator (CQI), which shows, in the form of a graph, the areas of colonoscopy performance that may require further improvement at each participating hospital.

In our view, it is uncertain whether the CQI offers advantages over other methods of comparing colonoscopy quality, such as funnel plots of center performance with 95 % confidence intervals, as proposed by Gavin et al. [4], or simple benchmarking against the recommended target values for CIR (90 % for diagnostic and 95 % for screening colonoscopies) and ADR (25 % for screening colonoscopies) [9] [10]. Nevertheless, the CQI graph demonstrates areas of underperformance that could have arisen not only from differences in the performance of individual endoscopists but also from differences in institutional policies.
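As an illustration of the funnel-plot approach, and not necessarily the exact method used by Gavin et al., 95 % control limits around a target or pooled proportion p for a center performing n colonoscopies are commonly approximated via the normal approximation to the binomial distribution:

\text{limits}(n) \approx p \pm 1.96 \sqrt{\frac{p\,(1-p)}{n}}

A center whose observed CIR or ADR falls outside these limits for its procedure volume would be flagged as a potential outlier; exact binomial limits are preferable when n is small.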

It would be important to understand precisely why the centers performed differently but, unfortunately, this question can only be partially addressed on the basis of the data provided in the study. Bowel preparation quality at the study centers can largely be attributed to institutional policies. All of the centers used the validated Boston Bowel Preparation Scale, with a score of 6 or more indicating adequate bowel cleansing. There was significant variability in adequate bowel cleansing across the hospitals, ranging from 79.0 % to 97.6 %. Notably, adequate bowel cleansing was significantly associated with both CIR and ADR. It is likely that the centers differed in the quality of the bowel preparation instructions given to patients and in the adoption of a routine split-dose regimen policy, both of which have been proven to improve bowel preparation at the center level [11] [12].
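For context, the Boston Bowel Preparation Scale is the sum of three segmental scores (right, transverse, and left colon), each rated from 0 to 3 after cleansing maneuvers, giving a total of 0 to 9; the cutoff of 6 or more used by the study centers therefore corresponds to an average segmental score of at least 2:

\mathrm{BBPS} = s_{\text{right}} + s_{\text{transverse}} + s_{\text{left}}, \qquad s_{i} \in \{0, 1, 2, 3\}, \qquad 0 \le \mathrm{BBPS} \le 9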

Other institutional policies that could have influenced center performance are the timing of colonoscopy and the adequacy of the workforce. Although such data were not available in the study by Belderbos et al., it has previously been shown that physician fatigue and tight scheduling [13] can play an important role in the quality of the colonoscopies performed. Similarly, the design generation of the instruments used and additional endoscopic equipment such as magnetic endoscope imaging, the availability of which also depends on institutional policy, have been shown in some studies to be associated with colonoscopy quality [14] [15].

Probably one of the crucial factors for colonoscopy performance at the center level is the institutional approach to colonoscopy training. This starts with the central role of the center lead, the availability of well-prepared and dedicated trainers, an established training curriculum, and participation in additional endoscopy courses, and ends with appropriate methods of assessing colonoscopy competence. It has now been shown that appropriately training individuals to become center leads in colonoscopy training improves the overall performance of the entire endoscopy center [16]. Unfortunately, the study by Belderbos et al. did not provide details on how training was organized across the participating hospitals.

We think that the study by Belderbos et al. has opened the door to scientific discussion of colonoscopy quality assessment at the center level. It demonstrates that colonoscopy quality, measured using the CIR and ADR, varies considerably across hospitals, and it proposes a graphical tool to facilitate comparison between centers. Yet the minimum set of quality indicators to be measured by each endoscopy center remains to be established. The Quality Improvement Committee of the European Society of Gastrointestinal Endoscopy has joined forces with United European Gastroenterology to define such a minimum set, but this work has yet to be completed. These organizations have also started to work towards the development of integrated digital reporting systems in order to standardize the documentation of endoscopic procedures, which will facilitate quality assessment across centers. Once these systems have been implemented, the next important step will be to understand which institutional policies are responsible for any observed differences in quality indicators across hospitals, and to what extent. Subsequently, efforts will need to be made to change the policies of underperforming centers. Finally, the audit loop will need to be closed by assessing whether changes in these policies translate into improved outcomes.